Questions 31 to 33

Explanations for questions 31 to 33

Question 31#

A web application is deployed in multiple regions behind an ELB Application Load Balancer. You need deterministic routing to the closest region and automatic failover. Traffic should traverse the AWS global network for consistent performance.

How can this be achieved?

  1. Configure AWS Global Accelerator, and configure the ALBs as targets.
  2. Place an EC2 Proxy in front of the ALB, and configure automatic failover.
  3. Create a Route 53 Alias record for each ALB, and configure a latency-based routing policy.
  4. Use a CloudFront distribution with multiple custom origins in each region, and configure for high availability.

Correct Answer: 1

Explanation: AWS Global Accelerator is a service that improves the availability and performance of applications with local or global users. You can configure the ALBs as targets, and Global Accelerator will automatically route users to the closest point of presence and on to the nearest healthy regional endpoint.

Failover is automatic and does not rely on any client-side cache changes as the IP addresses for Global Accelerator are static anycast addresses. Global Accelerator also uses the AWS global network, which ensures consistent performance.

Automatic failover with AWS Global Accelerator

CORRECT: “Configure AWS Global Accelerator, and configure the ALBs as targets.” is the correct answer.

INCORRECT: “Place an EC2 Proxy in front of the ALB, and configure automatic failover.” is incorrect. An EC2 proxy in front of the ALB does not meet the requirements: it does not provide deterministic routing to the closest region, and failover would occur within a single region, which does not protect against a regional failure. It also introduces a potential bottleneck and a single point of failure.

INCORRECT: “Create a Route 53 Alias record for each ALB, and configure a latency-based routing policy.” is incorrect. A Route 53 Alias record for each ALB with latency-based routing does provide routing based on latency and failover. However, the traffic will not traverse the AWS global network.

INCORRECT: “Use a CloudFront distribution with multiple custom origins in each region, and configure for high availability.” is incorrect. You can use CloudFront with multiple custom origins, and configure for HA. However, the traffic will not traverse the AWS global network.
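The correct design can be sketched with the AWS CLI. This is a hedged command sketch, not a complete setup: the ARNs, accelerator name, and regions below are placeholders, and the Global Accelerator API is only available in the us-west-2 region.

```shell
# Create the accelerator (its two static anycast IPs are returned here).
aws globalaccelerator create-accelerator \
    --name web-app-accelerator \
    --region us-west-2

# Add a listener for HTTPS traffic (placeholder accelerator ARN).
aws globalaccelerator create-listener \
    --accelerator-arn arn:aws:globalaccelerator::111122223333:accelerator/EXAMPLE \
    --protocol TCP \
    --port-ranges FromPort=443,ToPort=443 \
    --region us-west-2

# Create one endpoint group per region with the regional ALB as the endpoint.
# Global Accelerator health-checks the ALB and shifts traffic to the other
# region's endpoint group automatically if it becomes unhealthy.
aws globalaccelerator create-endpoint-group \
    --listener-arn arn:aws:globalaccelerator::111122223333:listener/EXAMPLE \
    --endpoint-group-region eu-west-1 \
    --endpoint-configurations EndpointId=arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/my-alb/EXAMPLE,Weight=128 \
    --region us-west-2
```

Repeating the final command for each deployment region gives the multi-region failover described above.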

References:

https://aws.amazon.com/global-accelerator/
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html

Question 32#

You are looking for a method to distribute onboarding videos to your company’s numerous remote workers around the world. The training videos are located in an S3 bucket that is not publicly accessible. Which of the options below would allow you to share the videos?

  1. Use CloudFront, and set the S3 bucket as an origin.
  2. Use a Route 53 Alias record that points to the S3 bucket.
  3. Use ElastiCache, and attach the S3 bucket as a cache origin.
  4. Use CloudFront, and a custom origin pointing to an EC2 instance.

Correct Answer: 1

Explanation: CloudFront uses origins, which specify where the CDN fetches the files it distributes. An origin can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or any other HTTP server, including external (non-AWS) servers. When using Amazon S3 as an origin, you place all of your objects within the bucket, and the bucket can remain private by restricting access to CloudFront with an origin access identity (OAI) or origin access control (OAC).

CloudFront with different origins

CORRECT: “Use CloudFront, and set the S3 bucket as an origin.” is the correct answer.

INCORRECT: “Use a Route 53 Alias record that points to the S3 bucket.” is incorrect. An Alias record can only point to an S3 static website endpoint, which requires the bucket to be publicly accessible, so this cannot be used to reach a private bucket.

INCORRECT: “Use ElastiCache, and attach the S3 bucket as a cache origin.” is incorrect. You cannot configure an origin with ElastiCache.

INCORRECT: “Use CloudFront, and a custom origin pointing to an EC2 instance.” is incorrect. You can configure a custom origin pointing to an EC2 instance, but as the training videos are located in an S3 bucket, this would not be helpful.
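With the correct option, the bucket stays private: CloudFront signs its origin requests, and a bucket policy grants read access to the CloudFront service principal only for requests coming from the specific distribution. The following is a sketch of such a policy using origin access control; the bucket name, account ID, and distribution ID are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::training-videos-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EXAMPLEID"
        }
      }
    }
  ]
}
```

Remote workers then fetch the videos through the distribution's domain name, never directly from S3.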

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html

Question 33#

A client is in the design phase of developing an application that will process orders for their online ticketing system. The application will use a number of front-end EC2 instances that pick up orders and place them in a queue for processing by another set of back-end EC2 instances. The client will offer multiple service levels that customers can choose to pay for.

The client has asked how they can design the application to process the orders in a prioritized way based on the level of service the customer has chosen.

  1. Create multiple SQS queues, configure exactly-once processing, and set the maximum visibility timeout to 12 hours.
  2. Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested, and configure the back-end instances to sequentially poll the queues in order of priority.
  3. Create a combination of FIFO queues and Standard queues, and configure the applications to place messages into the relevant queue based on priority.
  4. Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority, and configure the back-end instances to poll the queue and pick up messages in the order in which they are presented.

Correct Answer: 2

Explanation: The best option is to create multiple queues and configure the application to place orders onto a specific queue based on the level of service. You then configure the back-end instances to poll these queues in order of priority, so they pick up the higher priority jobs first.

INCORRECT: “Create multiple SQS queues, configure exactly-once processing, and set the maximum visibility timeout to 12 hours.” is incorrect. Creating multiple SQS queues and configuring exactly-once processing (only possible with FIFO) would not ensure that the order of the messages is prioritized.

CORRECT: “Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested, and configure the back-end instances to sequentially poll the queues in order of priority.” is the correct answer.

INCORRECT: “Create a combination of FIFO queues and Standard queues, and configure the applications to place messages into the relevant queue based on priority.” is incorrect as creating a mixture of queue types is not the best way to separate the messages. In addition, there is nothing in this option that explains how the messages would be picked up in the right order.

INCORRECT: “Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority, and configure the back-end instances to poll the queue and pick up messages in the order in which they are presented.” is incorrect. This would not work as standard queues offer best-effort ordering, so there’s no guarantee that the messages would be picked up in the correct order.
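The prioritized-polling pattern from the correct answer can be sketched in Python. This is a minimal illustration, not a production worker: the queue URLs are hypothetical, and `receive_fn` stands in for a wrapper around boto3's `sqs.receive_message` so the logic can be shown without AWS access.

```python
# Sketch of a back-end worker polling multiple SQS queues in priority order.
# Queue URLs are hypothetical; receive_fn would wrap sqs.receive_message.

PRIORITY_ORDER = [
    "https://sqs.example.com/premium-orders",   # polled first
    "https://sqs.example.com/standard-orders",  # polled only if premium is empty
]

def next_message(receive_fn, queue_urls=PRIORITY_ORDER):
    """Check each queue in priority order; return the first message found."""
    for url in queue_urls:
        batch = receive_fn(url)  # a list of messages, possibly empty
        if batch:
            return url, batch[0]
    return None, None  # all queues are currently empty
```

A worker would call `next_message` in a loop, deleting each message from its queue after successful processing, so higher-priority orders are always drained before lower-priority ones are touched.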

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
